Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification

Authors

Abstract

A well-designed loss function can effectively improve the characterization ability of network features without increasing the amount of computation in the model inference stage, and has therefore become a focus of recent research. Given that existing lightweight loss functions add constraints only to the last layer of the network, which severely attenuates the gradient during backpropagation, we propose a hierarchical polynomial kernel prototype loss function in this study. In this loss function, adding prototypes at multiple stages of the deep neural network enhances the efficiency of gradient return, and the multi-layer loss functions are used only in the training stage, not in the inference stage. In addition, the good non-linear expression ability of the polynomial kernel improves the characterization performance of the network features. Verification on public datasets shows that models trained with the proposed loss function achieve higher accuracy than those trained with other loss functions.
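The abstract describes the idea but not the exact formulas. A minimal NumPy sketch of one plausible reading — replacing the inner-product logits of a standard softmax with a polynomial kernel against learnable class prototypes, attached at several network stages during training — could look like the following. The values of `gamma`, `coef0`, `degree`, and all shapes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def polynomial_prototype_logits(features, prototypes, gamma=1.0, coef0=1.0, degree=2):
    """Class scores as a polynomial kernel between features and class prototypes.

    features:   (batch, dim) embeddings from some stage of the network
    prototypes: (classes, dim) learnable per-class prototype vectors
    """
    return (gamma * features @ prototypes.T + coef0) ** degree

def softmax_cross_entropy(logits, labels):
    """Numerically stable softmax cross-entropy over the kernel logits."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy data: 4 samples, 8-dim features, 3 classes.
rng = np.random.default_rng(0)
labels = np.array([0, 1, 2, 1])

# One feature map and one prototype set per supervised stage; per the abstract,
# these auxiliary losses exist only at training time. The total loss sums
# the per-stage terms.
stage_features = [rng.normal(size=(4, 8)) for _ in range(3)]
stage_prototypes = [rng.normal(size=(3, 8)) for _ in range(3)]
total_loss = sum(
    softmax_cross_entropy(polynomial_prototype_logits(f, p), labels)
    for f, p in zip(stage_features, stage_prototypes)
)
```

At inference only the final classifier is evaluated, so the multi-stage terms add no cost to deployment, which matches the abstract's claim.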


Similar articles

Incrementally Learning the Hierarchical Softmax Function for Neural Language Models

Neural network language models (NNLMs) have attracted a lot of attention recently. In this paper, we present a training method that can incrementally train the hierarchical softmax function for NNLMs. We split the cost function to model the old and updated corpora separately, and factorize the objective function for the hierarchical softmax. Then we provide a new stochastic gradient based method to ...
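As background, the hierarchical softmax being extended factorizes the word distribution through intermediate nodes. A minimal two-level sketch (class, then word within class), with random weights standing in for trained NNLM parameters:

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
dim, n_classes, words_per_class = 16, 4, 5
h = rng.normal(size=dim)                                     # NNLM hidden state
W_class = rng.normal(size=(n_classes, dim))                  # class-level output weights
W_word = rng.normal(size=(n_classes, words_per_class, dim))  # per-class word weights

def word_prob(c, w):
    """p(word) = p(class c | h) * p(word w | class c, h)."""
    return softmax(W_class @ h)[c] * softmax(W_word[c] @ h)[w]

# The factorization still defines a proper distribution over the vocabulary.
total = sum(word_prob(c, w)
            for c in range(n_classes) for w in range(words_per_class))
```

Because each softmax is over a small set, scoring one word costs O(classes + words-per-class) instead of O(vocabulary), which is why the structure is attractive for large vocabularies.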


Hierarchical loss for classification

Failing to distinguish between a sheepdog and a skyscraper should be worse and penalized more than failing to distinguish between a sheepdog and a poodle; after all, sheepdogs and poodles are both breeds of dogs. However, existing metrics of failure (so-called “loss” or “win”) used in textual or visual classification/recognition via neural networks seldom view a sheepdog as more similar to a po...
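The sheepdog/poodle/skyscraper intuition can be made concrete with a label tree. In this hedged sketch (the hierarchy and the distance function are illustrative, not the paper's metric), the misclassification penalty grows with the distance between the predicted and true labels in the tree:

```python
# Paths from the root to each leaf label (illustrative hierarchy).
hierarchy = {
    "sheepdog":   ["entity", "animal", "dog", "sheepdog"],
    "poodle":     ["entity", "animal", "dog", "poodle"],
    "skyscraper": ["entity", "structure", "building", "skyscraper"],
}

def common_prefix_len(pa, pb):
    """Length of the shared root-to-node prefix of two paths."""
    n = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        n += 1
    return n

def tree_distance(a, b):
    """Number of edges between two leaves in the label tree."""
    pa, pb = hierarchy[a], hierarchy[b]
    common = common_prefix_len(pa, pb)
    return (len(pa) - common) + (len(pb) - common)

def hierarchical_penalty(true_label, predicted_label):
    """Misclassification cost proportional to tree distance."""
    return tree_distance(true_label, predicted_label)
```

Confusing a sheepdog with a poodle (siblings under "dog") costs far less than confusing it with a skyscraper, which is exactly the asymmetry a flat 0/1 loss cannot express.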


Self-organized Hierarchical Softmax

We propose a new self-organizing hierarchical softmax formulation for neural-network-based language models over large vocabularies. Instead of using a predefined hierarchical structure, our approach is capable of learning word clusters with clear syntactical and semantic meaning during the language model training process. We provide experiments on standard benchmarks for language modeling and s...


Classification Loss Function for Parameter Ensembles in Bayesian Hierarchical Models

Parameter ensembles or sets of point estimates constitute one of the cornerstones of modern statistical practice. This is especially the case in Bayesian hierarchical models, where different decision-theoretic frameworks can be deployed to summarize such parameter ensembles. The estimation of these parameter ensembles may thus substantially vary depending on which inferential goals are prioriti...


Leaf-Smoothed Hierarchical Softmax for Ordinal Prediction

We propose a new approach to conditional probability estimation for ordinal labels. First, we present a specialized hierarchical softmax variant inspired by k-d trees that leverages the inherent spatial structure of (potentially-multivariate) ordinal labels. We then adapt ideas from signal processing on noisy graphs to develop a novel regularizer for such hierarchical softmax models. Both our t...



Journal

Journal title: Applied Sciences

Year: 2022

ISSN: 2076-3417

DOI: https://doi.org/10.3390/app122010336